safety certificate

Operational Safety in Human-in-the-loop Human-in-the-plant Autonomous Systems

Banerjee, Ayan, Maity, Aranyak, Lamrani, Imane, Gupta, Sandeep K. S.

arXiv.org Artificial Intelligence

Control-affine assumptions that treat human inputs as external disturbances, common in certified safe controller synthesis, are frequently violated in operational deployment under causal human actions. This paper takes a human-in-the-loop human-in-the-plant (HIL-HIP) approach to ensuring operational safety of safety-critical autonomous systems: the human and the real-world controller (RWC) are modeled as a unified system. A three-way interaction is considered: a) between HIP and HIL through personalized inputs and biological feedback processes, b) between RWC and HIP through sensors and actuators, and c) between HIL and RWC through personalized configuration changes and data feedback. We extend control Lyapunov theory by generating control Lyapunov barrier function (CLBF) certificates under human action plans, model the HIL as a combination of a Markov chain for spontaneous events and a fuzzy inference system for event responses, treat the RWC as a black box, and integrate the HIL-HIP model with neural architectures that can learn CLBF certificates. We show that the synthesized HIL-HIP controller for automated insulin delivery in Type 1 Diabetes is the only controller to meet safety requirements under human action inputs.
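The abstract above rests on the standard control Lyapunov barrier function (CLBF) decrease condition for control-affine systems. As an illustrative sketch only (not the paper's method; the toy dynamics, certificate, and function names below are hypothetical), the condition can be checked numerically:

```python
# Illustrative sketch (not the paper's synthesis procedure): a CLBF-style
# decrease condition for a control-affine system  x' = f(x) + g(x)*u.
# A candidate certificate V must satisfy, for some admissible input u,
#     dV/dx * (f(x) + g(x)*u) <= -lam * V(x).
# All dynamics, V, and names here are toy assumptions for illustration.

def f(x):            # drift dynamics (toy, unstable: x' = 0.5 x)
    return 0.5 * x

def g(x):            # input gain (toy, constant)
    return 1.0

def V(x):            # candidate quadratic certificate
    return x ** 2

def dVdx(x):
    return 2.0 * x

def clbf_boundary_input(x, lam=1.0):
    """Input u on the boundary of the CLBF decrease condition."""
    # Solve  dVdx*(f + g*u) = -lam*V  for u when dVdx*g != 0.
    a = dVdx(x) * g(x)
    b = dVdx(x) * f(x) + lam * V(x)
    if abs(a) < 1e-9:        # no control authority at this state
        return 0.0
    return -b / a

x = 2.0
u = clbf_boundary_input(x)
residual = dVdx(x) * (f(x) + g(x) * u) + 1.0 * V(x)
print(u, round(residual, 6))  # boundary input; residual ≈ 0
```

Any input at least as stabilizing as the boundary value keeps the certificate dissipating; the paper's contribution is synthesizing such certificates under human action plans rather than this hand-written check.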


Myopically Verifiable Probabilistic Certificates for Safe Control and Learning

Wang, Zhuoyuan, Jing, Haoming, Kurniawan, Christian, Chern, Albert, Nakahira, Yorie

arXiv.org Artificial Intelligence

This paper addresses the design of safety certificates for stochastic systems, with a focus on ensuring long-term safety through fast real-time control. In stochastic environments, set invariance-based methods that restrict the probability of risk events in infinitesimal time intervals may exhibit significant long-term risks due to cumulative uncertainties/risks. On the other hand, reachability-based approaches that account for the long-term future may require prohibitive computation in real-time decision making. To overcome this challenge involving stringent long-term safety vs. computation tradeoffs, we first introduce a novel technique termed "probabilistic invariance". This technique characterizes the invariance conditions of the probability of interest. When the target probability is defined using long-term trajectories, this technique can be used to design myopic conditions/controllers with assured long-term safe probability. Then, we integrate this technique into safe control and learning. The proposed control methods efficiently assure long-term safety using neural networks or model predictive controllers with short outlook horizons. The proposed learning methods can be used to guarantee long-term safety during and after training. Finally, we demonstrate the performance of the proposed techniques in numerical simulations.
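The motivating tradeoff in this abstract, that per-interval risk bounds can compound into large long-term risk, can be made concrete with a toy calculation (an assumption-laden sketch, not the paper's construction): with an i.i.d. per-step failure probability eps, the safe probability over a horizon T decays geometrically.

```python
# Illustrative sketch (not the paper's probabilistic-invariance technique):
# why bounding risk only over short intervals can still leave large
# long-term risk. Assume (hypothetically) i.i.d. per-step failures.

def long_term_safe_prob(eps, T):
    """Probability of remaining safe for T steps, each failing w.p. eps."""
    return (1.0 - eps) ** T

eps = 0.01  # a per-step risk bound that looks "safe" myopically
print(long_term_safe_prob(eps, 1))        # 0.99 for a single step
print(long_term_safe_prob(eps, 100))      # ≈ 0.37 over 100 steps
```

This geometric decay is exactly what the proposed myopic conditions avoid: they are designed so that satisfying a short-horizon condition still certifies the long-term safe probability.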


Warning that robot lawnmowers are killing hedgehogs: Scientists propose must-have garden gadgets come with 'safety certificates'

Daily Mail - Science & tech

Hedgehogs are increasingly being killed and injured in encounters with robot lawnmowers, which have few safety features to protect wildlife, according to Oxford University scientists. Researchers conducted a series of tests with the mowers, the latest must-have garden gadget, with a view to creating a 'hedgehog friendly' certification so gardeners need not fear any prickly casualties when they trim the grass. To ensure no harm was caused to living hedgehogs, scientists used rubber 'crash test hedgehogs' instead to see if the robot mower would turn away on encountering one of Mrs Tiggywinkle's tribe on the lawn. Hedgehogs are already in serious decline, with reasons including habitat loss, road traffic accidents, intensive agriculture, and injuries from dog bites and garden strimmers. But now mowers are adding to the threats.


Joint Synthesis of Safety Certificate and Safe Control Policy using Constrained Reinforcement Learning

Ma, Haitong, Liu, Changliu, Li, Shengbo Eben, Zheng, Sifa, Chen, Jianyu

arXiv.org Artificial Intelligence

Safety is the major consideration in controlling complex dynamical systems using reinforcement learning (RL), where a safety certificate can provide a provable safety guarantee. A valid safety certificate is an energy function indicating that safe states have low energy, together with a corresponding safe control policy that allows the energy function to always dissipate. The safety certificate and the safe control policy are closely related to each other and both challenging to synthesize. Therefore, existing learning-based studies treat one of them as prior knowledge in order to learn the other, which limits their applicability to general unknown dynamics. This paper proposes a novel approach that simultaneously synthesizes the energy-function-based safety certificate and learns the safe control policy with constrained reinforcement learning (CRL). We do not rely on prior knowledge of either an available model-based controller or a perfect safety certificate. In particular, we formulate a loss function that optimizes the safety certificate parameters by minimizing the occurrence of energy increases. By adding this optimization procedure as an outer loop to Lagrangian-based CRL, we jointly update the policy and safety certificate parameters and prove that they converge to their respective local optima: the optimal safe policy and a valid safety certificate. We evaluate our algorithms on multiple safety-critical benchmark environments. The results show that the proposed algorithm learns provably safe policies with no constraint violations. The validity and feasibility of the synthesized safety certificate are also verified numerically.
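The loss described above, penalizing occurrences of energy increases, can be sketched as a hinge penalty over observed transitions. This is a minimal illustrative sketch, not the authors' implementation; the linear certificate, margin, and toy data are all assumptions:

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): a loss that penalizes
# increases of an energy-function safety certificate phi along observed
# transitions, i.e. "minimizing the occurrence of energy increases".
# The parameterization, margin, and data below are hypothetical.

def phi(s, w):
    """Toy linear-in-state energy certificate with parameters w."""
    return float(np.dot(w, s))

def certificate_loss(transitions, w, margin=0.1):
    """Hinge penalty whenever energy fails to dissipate along a step."""
    violations = [max(0.0, phi(s_next, w) - phi(s, w) + margin)
                  for s, s_next in transitions]
    return sum(violations) / len(violations)

# Energy should decrease along safe transitions (states moving toward 0),
# so a well-chosen certificate incurs zero loss on this toy data.
w = np.array([1.0, 1.0])
transitions = [(np.array([1.0, 1.0]), np.array([0.5, 0.5])),
               (np.array([0.5, 0.5]), np.array([0.2, 0.2]))]
print(certificate_loss(transitions, w))  # 0.0: energy dissipates
```

In the paper this kind of certificate update runs as an outer loop around Lagrangian-based CRL policy updates, with both parameter sets updated jointly rather than the one-shot evaluation shown here.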